Results 1 - 20 of 27
1.
Article in English | MEDLINE | ID: mdl-38656841

ABSTRACT

The removal of outliers is crucial for establishing correspondence between two images. However, when the proportion of outliers approaches 90%, the task becomes highly challenging. Existing methods face limitations in effectively utilizing geometric transformation consistency (GTC) information and incorporating geometric semantic neighboring information. To address these challenges, we propose a Multi-Stage Geometric Semantic Attention (MSGSA) network. The MSGSA network consists of three key modules: the multi-branch (MB) module, the GTC module, and the geometric semantic attention (GSA) module. The MB module uses a multi-branch structure to facilitate diverse and robust spatial transformations. The GTC module captures transformation consistency information from the preceding stage. The GSA module categorizes input based on the prior stage's output, enabling efficient extraction of geometric semantic information through a graph-based representation and inter-category information interaction using a Transformer. Extensive experiments on the YFCC100M and SUN3D datasets demonstrate that MSGSA outperforms current state-of-the-art methods in outlier removal and camera pose estimation, particularly in scenarios with a high prevalence of outliers. Source code is available at https://github.com/shuyuanlin.

2.
IEEE Trans Image Process ; 33: 2293-2304, 2024.
Article in English | MEDLINE | ID: mdl-38470591

ABSTRACT

Human emotions contain both basic and compound facial expressions. In many practical scenarios, it is difficult to access all the compound expression categories at one time. In this paper, we investigate comprehensive facial expression recognition (FER) in the class-incremental learning paradigm, where we define well-studied and easily-accessible basic expressions as initial classes and learn new compound expressions incrementally. To alleviate the stability-plasticity dilemma in our incremental task, we propose a novel Relationship-Guided Knowledge Transfer (RGKT) method for class-incremental FER. Specifically, we develop a multi-region feature learning (MFL) module to extract fine-grained features for capturing subtle differences in expressions. Based on the MFL module, we further design a basic expression-oriented knowledge transfer (BET) module and a compound expression-oriented knowledge transfer (CET) module, by effectively exploiting the relationship across expressions. The BET module initializes the new compound expression classifiers based on expression relevance between basic and compound expressions, improving the plasticity of our model to learn new classes. The CET module transfers expression-generic knowledge learned from new compound expressions to enrich the feature set of old expressions, facilitating the stability of our model against forgetting old classes. Extensive experiments on three facial expression databases show that our method achieves superior performance in comparison with several state-of-the-art methods.


Subjects
Facial Recognition, Humans, Emotions, Learning, Facial Expression, Factual Databases
3.
IEEE Trans Image Process ; 33: 1257-1271, 2024.
Article in English | MEDLINE | ID: mdl-38252570

ABSTRACT

Few-shot action recognition aims to recognize new, unseen categories with only a few labeled samples per class. However, it still suffers from inadequate data, which easily leads to overfitting and poor generalization. Therefore, we propose a cross-modal contrastive learning network (CCLN), consisting of an adversarial branch and a contrastive branch, to perform effective few-shot action recognition. In the adversarial branch, we design a prototypical generative adversarial network (PGAN) to obtain synthesized samples for increasing training samples, which can mitigate the data scarcity problem and thereby alleviate overfitting. When training samples are limited, the obtained visual features are usually suboptimal for video understanding because they lack discriminative information. To address this issue, in the contrastive branch, we propose a cross-modal contrastive learning module (CCLM) to obtain discriminative feature representations of samples with the help of semantic information, which enables the network to enhance feature learning at the class level. Moreover, since videos contain crucial sequence and ordering information, we introduce a spatial-temporal enhancement module (SEM) to model the spatial context within video frames and the temporal context across video frames. The experimental results show that the proposed CCLN outperforms state-of-the-art few-shot action recognition methods on four challenging benchmarks, including Kinetics, UCF101, HMDB51 and SSv2.

4.
Article in English | MEDLINE | ID: mdl-38019631

ABSTRACT

Knowledge distillation (KD), which aims at transferring the knowledge from a complex network (a teacher) to a simpler and smaller network (a student), has received considerable attention in recent years. Typically, most existing KD methods work on well-labeled data. Unfortunately, real-world data often inevitably involve noisy labels, thus leading to performance deterioration of these methods. In this article, we study a little-explored but important issue, i.e., KD with noisy labels. To this end, we propose a novel KD method, called ambiguity-guided mutual label refinery KD (AML-KD), to train the student model in the presence of noisy labels. Specifically, based on the pretrained teacher model, a two-stage label refinery framework is introduced to refine labels gradually. In the first stage, we perform label propagation (LP) with small-loss selection guided by the teacher model, improving the learning capability of the student model. In the second stage, we perform mutual LP between the teacher and student models in a mutually beneficial way. During the label refinery, an ambiguity-aware weight estimation (AWE) module is developed to address the problem of ambiguous samples, avoiding overfitting these samples. One distinct advantage of AML-KD is that it is capable of learning a high-accuracy and low-cost student model with label noise. The experimental results on synthetic and real-world noisy datasets show the effectiveness of our AML-KD against state-of-the-art KD methods and label noise learning (LNL) methods. Code is available at https://github.com/Runqing-forMost/AML-KD.

5.
IEEE Trans Image Process ; 32: 4128-4141, 2023.
Article in English | MEDLINE | ID: mdl-37432827

ABSTRACT

Video object detection is a fundamental and important task in computer vision. One mainstay solution for this task is to aggregate features from different frames to enhance the detection on the current frame. Off-the-shelf feature aggregation paradigms for video object detection typically rely on inferring feature-to-feature (Fea2Fea) relations. However, most existing methods are unable to stably estimate Fea2Fea relations due to the appearance deterioration caused by object occlusion, motion blur or rare poses, resulting in limited detection performance. In this paper, we study Fea2Fea relations from a new perspective, and propose a novel dual-level graph relation network (DGRNet) for high-performance video object detection. Different from previous methods, our DGRNet innovatively leverages the residual graph convolutional network to simultaneously model Fea2Fea relations at two different levels including frame level and proposal level, which facilitates performing better feature aggregation in the temporal domain. To prune unreliable edge connections in the graph, we introduce a node topology affinity measure to adaptively evolve the graph structure by mining the local topological information of pairwise nodes. To the best of our knowledge, our DGRNet is the first video object detection method that leverages dual-level graph relations to guide feature aggregation. We conduct experiments on the ImageNet VID dataset and the results demonstrate the superiority of our DGRNet against state-of-the-art methods. Especially, our DGRNet achieves 85.0% mAP and 86.2% mAP with ResNet-101 and ResNeXt-101, respectively.

6.
IEEE Trans Cybern ; 53(11): 7071-7084, 2023 Nov.
Article in English | MEDLINE | ID: mdl-35604981

ABSTRACT

Person attribute recognition (PAR) aims to simultaneously predict multiple attributes of a person. Existing deep learning-based PAR methods have achieved impressive performance. Unfortunately, these methods usually ignore the fact that different attributes have an imbalance in the number of noisy-labeled samples in the PAR training datasets, thus leading to suboptimal performance. To address the above problem of imbalanced noisy-labeled samples, we propose a novel and effective loss called drop loss for PAR. In the drop loss, the attributes are treated differently in an easy-to-hard way. In particular, the noisy-labeled candidates, which are identified according to their gradient norms, are dropped with a higher drop rate for the harder attribute. Such a manner adaptively alleviates the adverse effect of imbalanced noisy-labeled samples on model learning. To illustrate the effectiveness of the proposed loss, we train a simple ResNet-50 model based on the drop loss and term it DropNet. Experimental results on two representative PAR tasks (including facial attribute recognition and pedestrian attribute recognition) demonstrate that the proposed DropNet achieves comparable or better performance in terms of both balanced accuracy and classification accuracy over several state-of-the-art PAR methods.
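As a rough illustration of the idea behind the drop loss, the sketch below drops, per attribute, the training samples whose gradient norms are largest (the noisy-labeled candidates) before averaging a binary cross-entropy loss. The function name and drop-rate convention are assumptions, not the paper's code; for a sigmoid output, the gradient norm with respect to the logit reduces to |p - y|, which is the quantity used here.

```python
import numpy as np

def drop_loss(logits, labels, drop_rates):
    """Per-attribute drop loss sketch: drop the samples with the largest
    gradient norms (likely noisy labels) before averaging the BCE loss.
    logits, labels: (batch, num_attrs); drop_rates: per-attribute fractions."""
    probs = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-7
    bce = -(labels * np.log(probs + eps) + (1.0 - labels) * np.log(1.0 - probs + eps))
    grad_norm = np.abs(probs - labels)  # gradient of BCE w.r.t. the logit
    batch, num_attrs = logits.shape
    total, kept = 0.0, 0
    for a in range(num_attrs):
        n_drop = int(drop_rates[a] * batch)
        keep = np.argsort(grad_norm[:, a])[: batch - n_drop]  # keep low-gradient samples
        total += bce[keep, a].sum()
        kept += len(keep)
    return total / max(kept, 1)
```

A harder attribute would simply be assigned a larger entry in `drop_rates`, discarding more of its high-gradient candidates.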

7.
IEEE Trans Image Process ; 31: 6605-6620, 2022.
Article in English | MEDLINE | ID: mdl-36256709

ABSTRACT

Recently, graph-based methods have been widely applied to model fitting. However, in these methods, association information is invariably lost when data points and model hypotheses are mapped to the graph domain. In this paper, we propose a novel model fitting method based on co-clustering on bipartite graphs (CBG) to estimate multiple model instances in data contaminated with outliers and noise. Model fitting is reformulated as a bipartite graph partition behavior. Specifically, we use a bipartite graph reduction technique to eliminate some insignificant vertices (outliers and invalid model hypotheses), thereby improving the reliability of the constructed bipartite graph and reducing the computational complexity. We then use a co-clustering algorithm to learn a structured optimal bipartite graph with exact connected components for partitioning that can directly estimate the model instances (i.e., post-processing steps are not required). The proposed method fully utilizes the duality of data points and model hypotheses on bipartite graphs, leading to superior fitting performance. Exhaustive experiments show that the proposed CBG method performs favorably when compared with several state-of-the-art fitting methods.

8.
Article in English | MEDLINE | ID: mdl-35834450

ABSTRACT

Recent methods in network pruning have indicated that a dense neural network involves a sparse subnetwork (called a winning ticket), which can achieve similar test accuracy to its dense counterpart with far fewer network parameters. Generally, these methods search for the winning tickets on well-labeled data. Unfortunately, in many real-world applications, the training data are unavoidably contaminated with noisy labels, thereby leading to performance deterioration of these methods. To address the above-mentioned problem, we propose a novel two-stream sample selection network (TS3-Net), which consists of a sparse subnetwork and a dense subnetwork, to effectively identify the winning ticket with noisy labels. The training of TS3-Net contains an iterative procedure that switches between training both subnetworks and pruning the smallest magnitude weights of the sparse subnetwork. In particular, we develop a multistage learning framework including a warm-up stage, a semisupervised alternate learning stage, and a label refinement stage, to progressively train the two subnetworks. In this way, the classification capability of the sparse subnetwork can be gradually improved at a high sparsity level. Extensive experimental results on both synthetic and real-world noisy datasets (including MNIST, CIFAR-10, CIFAR-100, ANIMAL-10N, Clothing1M, and WebVision) demonstrate that our proposed method achieves state-of-the-art performance with very small memory consumption for label noise learning. Code is available at https://github.com/Runqing-forMost/TS3-Net/tree/master.
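The "pruning the smallest magnitude weights" step that alternates with training can be illustrated with a small sketch; the function name and mask convention are illustrative assumptions, not the paper's code:

```python
import numpy as np

def prune_smallest(weights, mask, prune_frac):
    """One magnitude-pruning step: zero out (via the mask) the smallest-magnitude
    fraction of the still-active weights; repeated over rounds, this yields
    the high-sparsity subnetwork that lottery-ticket methods train."""
    active = weights[mask]
    k = int(prune_frac * active.size)
    if k == 0:
        return mask
    thresh = np.sort(np.abs(active))[k - 1]          # magnitude cutoff
    return mask & (np.abs(weights) > thresh)          # deactivate small weights
```

In an iterative setting the returned mask is applied to the sparse subnetwork before the next training round.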

9.
IEEE Trans Image Process ; 31: 2529-2540, 2022.
Article in English | MEDLINE | ID: mdl-35275820

ABSTRACT

The explanation for deep neural networks has drawn extensive attention in the deep learning community over the past few years. In this work, we study the visual saliency, a.k.a. visual explanation, to interpret convolutional neural networks. Compared to iteration based saliency methods, single backward pass based saliency methods benefit from faster speed, and they are widely used in downstream visual tasks. Thus, we focus on single backward pass based methods. However, existing methods in this category struggle to successfully produce fine-grained saliency maps concentrating on specific target classes. That said, producing faithful saliency maps satisfying both target-selectiveness and fine-grainedness using a single backward pass is a challenging problem in the field. To mitigate this problem, we revisit the gradient flow inside the network, and find that the entangled semantics and original weights may disturb the propagation of target-relevant saliency. Inspired by those observations, we propose a novel visual saliency method, termed Target-Selective Gradient Backprop (TSGB), which leverages rectification operations to effectively emphasize target classes and further efficiently propagate the saliency to the image space, thereby generating target-selective and fine-grained saliency maps. The proposed TSGB consists of two components, namely, TSGB-Conv and TSGB-FC, which rectify the gradients for convolutional layers and fully-connected layers, respectively. Extensive qualitative and quantitative experiments on the ImageNet and Pascal VOC datasets show that the proposed method achieves more accurate and reliable results than the other competitive methods. Code is available at https://github.com/123fxdx/CNNvisualizationTSGB.


Subjects
Attention, Neural Networks (Computer), Semantics
10.
IEEE Trans Cybern ; 50(7): 3294-3306, 2020 Jul.
Article in English | MEDLINE | ID: mdl-30843859

ABSTRACT

In this paper, a new robust model fitting method is proposed to efficiently segment multistructure data even when they are heavily contaminated by outliers. The proposed method is composed of three steps: first, a conventional greedy search strategy is employed to generate (initial) model hypotheses based on the sequential "fit-and-remove" procedure because of its computational efficiency. Second, to efficiently generate accurate model hypotheses close to the true models, a novel global greedy search strategy initially samples from the inliers of the obtained model hypotheses and samples subsequent data subsets from the whole input data. Third, mutual information theory is applied to fuse the model hypotheses of the same model instance. The conventional greedy search strategy is used to generate model hypotheses for the remaining model instances, if the number of retained model hypotheses is less than that of the true model instances after fusion. The second and the third steps are performed iteratively until an adequate solution is obtained. Experimental results demonstrate the effectiveness and efficiency of the proposed method for model fitting.
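A minimal sketch of the conventional sequential "fit-and-remove" strategy from the first step, shown here for 2D line fitting with a RANSAC-style hypothesis search. All names and thresholds are illustrative, and the paper's global greedy refinement and mutual-information fusion are not shown:

```python
import numpy as np

def fit_and_remove_lines(points, n_lines, inlier_thresh, n_trials=200, rng=None):
    """Greedy sequential 'fit-and-remove': repeatedly fit the dominant 2D line
    with a RANSAC-style search, then remove its inliers and continue."""
    rng = np.random.default_rng(rng)
    remaining = points.copy()
    lines = []
    for _ in range(n_lines):
        best_line, best_mask = None, None
        for _ in range(n_trials):
            i, j = rng.choice(len(remaining), size=2, replace=False)
            p, q = remaining[i], remaining[j]
            d = q - p
            norm = np.hypot(d[0], d[1])
            if norm < 1e-12:
                continue  # degenerate pair (duplicate points)
            n = np.array([-d[1], d[0]]) / norm  # unit normal of the line through p, q
            c = float(n @ p)                    # line in normal form: n . x = c
            mask = np.abs(remaining @ n - c) < inlier_thresh
            if best_mask is None or mask.sum() > best_mask.sum():
                best_line, best_mask = (n, c), mask
        if best_line is None:
            break
        lines.append(best_line)
        remaining = remaining[~best_mask]  # remove inliers of the fitted line
        if len(remaining) < 2:
            break
    return lines
```

Each iteration fits one structure and removes its inliers, which is exactly why hypotheses for later structures degrade when early inlier sets are imperfect, motivating the paper's second and third steps.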

11.
IEEE Trans Cybern ; 50(10): 4530-4543, 2020 Oct.
Article in English | MEDLINE | ID: mdl-30640643

ABSTRACT

The performance of many robust model fitting techniques is largely dependent on the quality of the generated hypotheses. In this paper, we propose a novel guided sampling method, called accelerated guided sampling (AGS), to efficiently generate the accurate hypotheses for multistructure model fitting. Based on the observations that residual sorting can effectively reveal the data relationship (i.e., determine whether two data points belong to the same structure), and keypoint matching scores can be used to distinguish inliers from gross outliers, AGS effectively combines the benefits of residual sorting and keypoint matching scores to efficiently generate accurate hypotheses via information theoretic principles. Moreover, we reduce the computational cost of residual sorting in AGS by designing a new residual sorting strategy, which only sorts the top-ranked residuals of input data, rather than all input data. Experimental results demonstrate the effectiveness of the proposed method in computer vision tasks, such as homography matrix and fundamental matrix estimation.

12.
Article in English | MEDLINE | ID: mdl-30629503

ABSTRACT

Single-image fog removal is important for surveillance applications, and many defogging methods have been proposed recently. Due to adverse atmospheric conditions, the scattering properties of foggy images depend not only on the depth information of the scene but also on the atmospheric aerosol model, which has a more prominent influence on illumination in a fog scene than in a haze scene. However, recent defogging methods confuse haze and fog and fail to fully account for these scattering properties. Thus, these methods are not sufficient to remove fog effects, especially for images in maritime surveillance. Therefore, this paper proposes a single-image defogging method for visual maritime surveillance. First, a comprehensive scattering model is proposed to formulate a fog image under glow-shaped environmental illumination. An illumination decomposition algorithm is then proposed to eliminate the glow effect on the airlight radiance and recover a fog layer in which objects at infinite distance have uniform luminance. Second, a transmission-map estimation based on the non-local haze-lines prior is used to constrain the transmission map to a reasonable range for the input fog image. Finally, the proposed illumination compensation algorithm enables the defogged image to preserve the natural illumination of the input image. In addition, a fog image dataset is established for visual maritime surveillance. Experimental results on the established dataset demonstrate that the proposed method outperforms state-of-the-art methods in terms of both subjective and objective evaluation criteria. Moreover, the proposed method can effectively remove fog and maintain naturalness in fog images.
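Methods in this area build on the standard atmospheric scattering model I = J * t + A * (1 - t). A minimal inversion of that basic model, without the glow term or illumination decomposition that this paper adds on top, might look like the following (names and the clamp value are illustrative):

```python
import numpy as np

def defog_scattering(I, A, t, t_min=0.1):
    """Invert the basic atmospheric scattering model I = J * t + A * (1 - t)
    for the scene radiance J, clamping the transmission map t to avoid
    amplifying noise where transmission is near zero."""
    t = np.maximum(t, t_min)
    return (I - A) / t + A
```

Here `I` is the observed image, `A` the airlight, and `t` the per-pixel transmission map (for example, the one constrained by the haze-lines prior in the abstract).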

13.
IEEE Trans Pattern Anal Mach Intell ; 41(3): 697-711, 2019 Mar.
Article in English | MEDLINE | ID: mdl-29994506

ABSTRACT

In this paper, we propose a simple and effective geometric model fitting method to fit and segment multi-structure data even in the presence of severe outliers. We cast the task of geometric model fitting as a representative mode-seeking problem on hypergraphs. Specifically, a hypergraph is first constructed, where the vertices represent model hypotheses and the hyperedges denote data points. The hypergraph involves higher-order similarities (instead of pairwise similarities used on a simple graph), and it can characterize complex relationships between model hypotheses and data points. In addition, we develop a hypergraph reduction technique to remove "insignificant" vertices while retaining as many "significant" vertices as possible in the hypergraph. Based on the simplified hypergraph, we then propose a novel mode-seeking algorithm to search for representative modes within reasonable time. Finally, the proposed mode-seeking algorithm detects modes according to two key elements, i.e., the weighting scores of vertices and the similarity analysis between vertices. Overall, the proposed fitting method is able to efficiently and effectively estimate the number and the parameters of model instances in the data simultaneously. Experimental results demonstrate that the proposed method achieves significant superiority over several state-of-the-art model fitting methods on both synthetic data and real images.

14.
IEEE Trans Cybern ; 48(3): 862-875, 2018 Mar.
Article in English | MEDLINE | ID: mdl-28222007

ABSTRACT

Color and intensity are two important components in an image. Usually, groups of image pixels, which are similar in color or intensity, are an informative representation for an object. They are therefore particularly suitable for computer vision tasks, such as saliency detection and object proposal generation. However, image pixels, which share a similar real-world color, may be quite different since colors are often distorted by intensity. In this paper, we reinvestigate the affinity matrices originally used in image segmentation methods based on spectral clustering. A new affinity matrix, which is robust to color distortions, is formulated for object discovery. Moreover, a cohesion measurement (CM) for object regions is also derived based on the formulated affinity matrix. Based on the new CM, a novel object discovery method is proposed to discover objects latent in an image by utilizing the eigenvectors of the affinity matrix. Then we apply the proposed method to both saliency detection and object proposal generation. Experimental results on several evaluation benchmarks demonstrate that the proposed CM-based method has achieved promising performance for these two tasks.

15.
IEEE Trans Image Process ; 23(8): 3522-34, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24951690

ABSTRACT

We propose an efficient approach to semidefinite spectral clustering (SSC), which addresses the Frobenius normalization with the positive semidefinite (p.s.d.) constraint for spectral clustering. Compared with the original Frobenius norm approximation-based algorithm, the proposed algorithm can more accurately find the closest doubly stochastic approximation to the affinity matrix by considering the p.s.d. constraint. In this paper, SSC is formulated as a semidefinite programming (SDP) problem. In order to solve the high computational complexity of SDP, we present a dual algorithm based on the Lagrange dual formalization. Two versions of the proposed algorithm are proffered: one with less memory usage and the other with faster convergence rate. The proposed algorithm has much lower time complexity than that of the standard interior-point-based SDP solvers. Experimental results on both the UCI data sets and real-world image data sets demonstrate that: 1) compared with the state-of-the-art spectral clustering methods, the proposed algorithm achieves better clustering performance and 2) our algorithm is much more efficient and can solve larger-scale SSC problems than those standard interior-point SDP solvers.
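The "closest doubly stochastic approximation" step can be sketched with plain alternating projections between the affine set of matrices with unit row and column sums (which has a closed-form projection) and the nonnegative orthant. This is a simplified von Neumann scheme, not the paper's SDP formulation with the p.s.d. constraint, and Dykstra-style corrections would be needed for the exact Frobenius-nearest point:

```python
import numpy as np

def nearest_doubly_stochastic(A, n_iters=1000):
    """Alternating projections toward the doubly stochastic matrix nearest to
    A in Frobenius norm: affine projection (closed form) then clipping to the
    nonnegative orthant."""
    n = A.shape[0]
    X = A.astype(float).copy()
    for _ in range(n_iters):
        # Closed-form projection onto {X : X @ 1 = 1, X.T @ 1 = 1}.
        row = X.sum(axis=1, keepdims=True)
        col = X.sum(axis=0, keepdims=True)
        s = X.sum()
        X = X + 1.0 / n + s / n**2 - row / n - col / n
        # Projection onto the nonnegative orthant.
        X = np.maximum(X, 0.0)
    return X
```

The abstract's point is that adding the p.s.d. constraint on top of these constraints turns the problem into an SDP, which the dual algorithm then solves far faster than interior-point solvers.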

16.
IEEE Trans Image Process ; 22(8): 3028-40, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23529089

ABSTRACT

A key problem in visual tracking is how to effectively combine spatio-temporal visual information from throughout a video to accurately estimate the state of an object. We address this problem by incorporating Dempster-Shafer (DS) information fusion into the tracking approach. To implement this fusion task, the entire image sequence is partitioned into spatially and temporally adjacent subsequences. A support vector machine (SVM) classifier is trained for object/nonobject classification on each of these subsequences, the outputs of which act as separate data sources. To combine the discriminative information from these classifiers, we further present a spatio-temporal weighted DS (STWDS) scheme. In addition, temporally adjacent sources are likely to share discriminative information on object/nonobject classification. To use such information, an adaptive SVM learning scheme is designed to transfer discriminative information across sources. Finally, the corresponding DS belief function of the STWDS scheme is embedded into a Bayesian tracking model. Experimental results on challenging videos demonstrate the effectiveness and robustness of the proposed tracking approach.
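The Dempster-Shafer combination at the core of the fusion step follows Dempster's rule. A small self-contained sketch over frozenset focal elements (the mass-function encoding is an assumption for illustration, not the paper's implementation, and the spatio-temporal weighting is omitted):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets; conflicting mass is renormalized away."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass assigned to incompatible hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources are incompatible")
    k = 1.0 - conflict
    return {focal: mass / k for focal, mass in combined.items()}
```

In the tracking setting, each SVM classifier's object/nonobject outputs would be turned into such a mass function before combination.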


Subjects
Algorithms, Computer-Assisted Image Interpretation/methods, Automated Pattern Recognition/methods, Subtraction Technique, Artificial Intelligence, Bayes Theorem, Image Enhancement/methods, Reproducibility of Results, Sensitivity and Specificity, Spatio-Temporal Analysis
17.
IEEE Trans Pattern Anal Mach Intell ; 35(4): 863-81, 2013 Apr.
Article in English | MEDLINE | ID: mdl-22868649

ABSTRACT

Visual tracking usually requires an object appearance model that is robust to changing illumination, pose, and other factors encountered in video. Many recent trackers utilize appearance samples in previous frames to form the bases upon which the object appearance model is built. This approach has the following limitations: 1) The bases are data driven, so they can be easily corrupted, and 2) it is difficult to robustly update the bases in challenging situations. In this paper, we construct an appearance model using the 3D discrete cosine transform (3D-DCT). The 3D-DCT is based on a set of cosine basis functions which are determined by the dimensions of the 3D signal and thus independent of the input video data. In addition, the 3D-DCT can generate a compact energy spectrum whose high-frequency coefficients are sparse if the appearance samples are similar. By discarding these high-frequency coefficients, we simultaneously obtain a compact 3D-DCT-based object representation and a signal reconstruction-based similarity measure (reflecting the information loss from signal reconstruction). To efficiently update the object representation, we propose an incremental 3D-DCT algorithm which decomposes the 3D-DCT into successive operations of the 2D discrete cosine transform (2D-DCT) and 1D discrete cosine transform (1D-DCT) on the input video data. As a result, the incremental 3D-DCT algorithm only needs to compute the 2D-DCT for newly added frames as well as the 1D-DCT along the third dimension, which significantly reduces the computational complexity. Based on this incremental 3D-DCT algorithm, we design a discriminative criterion to evaluate the likelihood of a test sample belonging to the foreground object. We then embed the discriminative criterion into a particle filtering framework for object state inference over time. Experimental results demonstrate the effectiveness and robustness of the proposed tracker.
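The separability that makes the incremental update cheap, namely that the 3D-DCT factors into a 2D-DCT per frame plus a 1D-DCT along time, can be checked with a direct orthonormal implementation. This is a sketch of the decomposition only, not the paper's incremental algorithm:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] *= np.sqrt(1.0 / n)
    C[1:] *= np.sqrt(2.0 / n)
    return C

def dct3(volume):
    """3D-DCT of a (time, height, width) volume as successive 1D transforms:
    a 2D-DCT of every frame, then a 1D-DCT along the temporal axis."""
    t, h, w = volume.shape
    Ct, Ch, Cw = dct_matrix(t), dct_matrix(h), dct_matrix(w)
    frames = np.einsum('hk,tkl,wl->thw', Ch, volume, Cw)  # per-frame 2D-DCT
    return np.einsum('ts,shw->thw', Ct, frames)           # 1D-DCT over time
```

Because only the temporal 1D-DCT mixes frames, appending a new frame requires one fresh 2D-DCT plus the temporal transform, which is the source of the computational saving described above.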


Subjects
Algorithms, Artificial Intelligence, Three-Dimensional Imaging/methods, Video Recording/methods, Face/anatomy & histology, Humans, Theoretical Models
18.
IEEE Trans Med Imaging ; 31(4): 963-76, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22113772

ABSTRACT

Surgeries of the skull base require accuracy to safely navigate the critical anatomy. This is particularly the case for endoscopic endonasal skull base surgery (ESBS) where the surgeons work within millimeters of neurovascular structures at the skull base. Today's navigation systems provide approximately 2 mm accuracy. Accuracy is limited by the indirect relationship of the navigation system, the image and the patient. We propose a method to directly track the position of the endoscope using video data acquired from the endoscope camera. Our method first tracks image feature points in the video and reconstructs the image feature points to produce 3D points, and then registers the reconstructed point cloud to a surface segmented from preoperative computed tomography (CT) data. After the initial registration, the system tracks image features and maintains the 2D-3D correspondence of image features and 3D locations. These data are then used to update the current camera pose. We present a method for validation of our system, which achieves submillimeter (0.70 mm mean) target registration error (TRE) results.
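Registering a reconstructed point cloud to a segmented surface rests on least-squares rigid alignment. A sketch of the classic Kabsch solution for known correspondences follows; the pipeline's 2D-3D feature tracking, surface segmentation, and iterative closest-point-style matching are beyond this snippet:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid alignment (Kabsch): find R, t minimizing
    sum_i || R @ P[i] + t - Q[i] ||^2 for corresponding 3D point sets."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)          # cross-covariance of centered points
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])          # guard against a reflection solution
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

With correspondences maintained over time, re-solving this alignment is what keeps the camera pose updated as the endoscope moves.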


Subjects
Natural Orifice Endoscopic Surgery/methods, Skull Base/surgery, Computer-Assisted Therapy/methods, Video Recording/methods, Algorithms, Computer Simulation, Humans, Nasal Cavity, Imaging Phantoms, Reproducibility of Results, X-Ray Computed Tomography/methods
19.
IEEE Trans Pattern Anal Mach Intell ; 34(6): 1177-92, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22064800

ABSTRACT

We propose a robust fitting framework, called Adaptive Kernel-Scale Weighted Hypotheses (AKSWH), to segment multiple-structure data even in the presence of a large number of outliers. Our framework contains a novel scale estimator called the Iterative Kth Ordered Scale Estimator (IKOSE). IKOSE can accurately estimate the scale of inliers for heavily corrupted multiple-structure data and is of interest by itself, since it can be used in other robust estimators. In addition to IKOSE, our framework includes several original elements based on the weighting, clustering, and fusing of hypotheses. AKSWH can simultaneously provide accurate estimates of the number of model instances, the parameters, and the scale of each model instance. We demonstrate good performance in practical applications such as line fitting, circle fitting, range image segmentation, homography estimation, and two-view-based motion segmentation, using both synthetic data and real images.
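A rough sketch of the IKOSE idea: estimate the inlier noise scale from the Kth smallest absolute residual under a Gaussian noise assumption, and iterate because the inlier count used in the quantile argument is itself scale-dependent. The constants and stopping rule here are illustrative, not the paper's exact procedure:

```python
import statistics
import numpy as np

def ikose(residuals, K, max_iters=50):
    """Iterative Kth Ordered Scale Estimator (sketch): estimate the inlier
    noise scale s from the Kth smallest absolute residual."""
    r = np.sort(np.abs(residuals))
    n = len(r)          # start by treating all data as inliers
    s = r[K - 1]
    for _ in range(max_iters):
        # Under Gaussian inlier noise N(0, s^2), r_K approximates the
        # (K/n)-quantile of the half-normal distribution.
        s = r[K - 1] / statistics.NormalDist().inv_cdf(0.5 * (1.0 + K / n))
        n_new = int(np.sum(r < 2.5 * s))  # points within 2.5 sigma count as inliers
        if n_new == n or n_new <= K:
            break
        n = n_new
    return s
```

Because only the K smallest residuals enter the estimate, gross outliers with large residuals have no direct influence on the recovered scale.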

20.
IEEE Trans Pattern Anal Mach Intell ; 34(4): 825-32, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22156096

ABSTRACT

It has been shown that the Universum data, which do not belong to either class of the classification problem of interest, may contain useful prior domain knowledge for training a classifier [1], [2]. In this work, we design a novel boosting algorithm that takes advantage of the available Universum data, hence the name UBoost. UBoost is a boosting implementation of Vapnik's alternative capacity concept to the large margin approach. In addition to the standard regularization term, UBoost also controls the learned model's capacity by maximizing the number of observed contradictions. Our experiments demonstrate that UBoost can deliver improved classification accuracy over standard boosting algorithms that use labeled data alone.
